-
AI is now a cornerstone of modern dataset analysis. In many real-world applications, practitioners are concerned with controlling specific kinds of errors rather than minimizing the overall number of errors. For example, biomedical screening assays may primarily be concerned with mitigating the number of false positives rather than false negatives. Quantifying uncertainty in AI-based predictions, and in particular those controlling specific kinds of errors, remains theoretically and practically challenging. We develop a strategy called multidimensional informed generalized hypothesis testing (MIGHT), which we prove accurately quantifies uncertainty and confidence given sufficient data, and concomitantly controls for particular error types. Our key insight was that it is possible to integrate canonical cross-validation and parametric calibration procedures within a nonparametric ensemble method. Simulations demonstrate that while typical AI-based approaches cannot be trusted to obtain the truth, MIGHT can be. We apply MIGHT to answer an open question in liquid biopsies using circulating cell-free DNA (ccfDNA) in individuals with or without cancer: Which biomarkers, or combinations thereof, can we trust? Performance estimates produced by MIGHT on ccfDNA data have coefficients of variation that are often orders of magnitude lower than those of other state-of-the-art algorithms, such as support vector machines, random forests, and Transformers, while often also achieving higher sensitivity. We find that combinations of variable sets often decrease rather than increase sensitivity over the optimal single variable set because some variable sets add more noise than signal. This work demonstrates the importance of quantifying uncertainty and confidence, with theoretical guarantees, for the interpretation of real-world data.
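To make the ingredients concrete, here is a minimal sketch of the general pattern the abstract names: honest out-of-fold scores from a nonparametric ensemble, a threshold calibrated to control the false-positive rate, and a parametric (binomial) confidence interval on sensitivity at that fixed specificity. This assumes scikit-learn; the function name and all parameter choices are illustrative, not the authors' implementation.

```python
# Hedged sketch, NOT the MIGHT implementation: cross-validated ensemble
# scores + parametric calibration of sensitivity at fixed specificity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

def sensitivity_at_specificity(X, y, target_specificity=0.98, n_splits=5, seed=0):
    """Estimate sensitivity at a fixed specificity from out-of-fold scores."""
    scores = np.zeros(len(y))
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in cv.split(X, y):
        clf = RandomForestClassifier(n_estimators=500, random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        scores[test_idx] = clf.predict_proba(X[test_idx])[:, 1]
    # Threshold chosen so the empirical false-positive rate is controlled.
    threshold = np.quantile(scores[y == 0], target_specificity)
    sens = np.mean(scores[y == 1] > threshold)
    # Normal-approximation (binomial) CI for the sensitivity estimate.
    n_pos = np.sum(y == 1)
    half_width = 1.96 * np.sqrt(sens * (1 - sens) / max(n_pos, 1))
    return sens, (max(0.0, sens - half_width), min(1.0, sens + half_width))
```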
-
Emerging electron microscopy connectome datasets provide connectivity maps of brains at single-cell resolution, enabling us to estimate various network statistics, such as connectedness. We desire the ability to assess how the functional complexity of these networks depends on these network statistics. To this end, we developed an analysis pipeline and a statistic, XORness, which quantifies the functional complexity of networks with varying network statistics. We illustrate that actual connectomes have high XORness, as do generated connectomes with the same network statistics, suggesting a normative role for functional complexity in guiding the evolution of connectomes and providing clues to guide the development of artificial neural networks.
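The abstract does not define XORness, but one illustrative way such a functional-complexity statistic could be operationalized is to score how well a fixed circuit's hidden representation linearly separates an XOR labeling of its inputs; a high score indicates the wiring supplies the nonlinearity XOR requires. The toy below is an assumption-laden sketch, not the paper's definition.

```python
# Toy XORness-style score (illustrative only, not the paper's statistic):
# fraction of XOR labels recovered by a least-squares linear readout on the
# hidden activity of a fixed two-step ReLU circuit.
import numpy as np

def xorness_score(W_in, W_rec, seed=0):
    """W_in: (n_hidden, 2) input weights; W_rec: (n_hidden, n_hidden) recurrence."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(2000, 2)).astype(float)  # binary input pairs
    y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0).astype(float)
    H = np.maximum(0.0, X @ W_in.T)    # one feedforward step
    H = np.maximum(0.0, H @ W_rec.T)   # one recurrent step
    # Least-squares linear readout; accuracy of its sign is the score.
    H1 = np.hstack([H, np.ones((len(H), 1))])
    beta, *_ = np.linalg.lstsq(H1, 2 * y - 1, rcond=None)
    return np.mean((H1 @ beta > 0) == (y > 0))
```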
-
Multiple case-controlled studies have shown that analyzing fragmentation patterns in plasma cell-free DNA (cfDNA) can distinguish individuals with cancer from healthy controls. However, few studies have investigated the various types of cfDNA fragmentomic patterns in individuals with other diseases. We therefore developed a comprehensive statistic, called fragmentation signatures, that integrates the distributions of fragment positioning, fragment length, and fragment end-motifs in cfDNA. We found that individuals with venous thromboembolism, systemic lupus erythematosus, dermatomyositis, or scleroderma have cfDNA fragmentation signatures that closely resemble those found in individuals with advanced cancers. Furthermore, these signatures were highly correlated with increases in inflammatory markers in the blood. We demonstrate that these similarities in fragmentation signatures lead to high rates of false positives in individuals with autoimmune or vascular disease when evaluated using conventional binary classification approaches for multicancer early detection (MCED). To address this issue, we introduced a multiclass approach for MCED that integrates fragmentation signatures with protein biomarkers and achieves improved specificity in individuals with autoimmune or vascular disease while maintaining high sensitivity. Though these data place substantial limitations on the specificity of fragmentomics-based tests for cancer diagnostics, they also offer ways to improve the interpretability of such tests. Moreover, we expect these results will lead to a better understanding of the process, most likely inflammatory, from which abnormal fragmentation signatures are derived.
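Since the signature integrates three distributions, a natural encoding is to concatenate their normalized histograms into one feature vector and feed that to a multiclass model. The sketch below assumes this framing; the bin boundaries, the fixed end-motif vocabulary, and the function name are illustrative choices, not the paper's method.

```python
# Hedged sketch of a fragmentation-signature feature vector: normalized
# distributions of fragment length, end-motif, and genomic-bin coverage,
# concatenated for a downstream multiclass classifier.
from collections import Counter
import numpy as np

def fragmentation_signature(lengths, end_motifs, bin_ids, motif_vocab, n_bins=500):
    """lengths: fragment lengths (bp); end_motifs: e.g. 4-mers drawn from a
    fixed vocabulary shared across samples; bin_ids: genomic bin index per fragment."""
    length_hist, _ = np.histogram(lengths, bins=np.arange(50, 501, 10), density=True)
    motif_counts = Counter(end_motifs)
    motif_freq = np.array([motif_counts[m] for m in motif_vocab], dtype=float)
    motif_freq /= motif_freq.sum()
    coverage = np.bincount(bin_ids, minlength=n_bins).astype(float)
    coverage /= coverage.sum()
    return np.concatenate([length_hist, motif_freq, coverage])
```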
-
We propose and study a data-driven method that can interpolate between a classical and a modern approach to classification for a class of linear models. The class is the convex combinations of an average of the source task classifiers and a classifier trained on the limited data available for the target task. We derive the expected loss of an element in the class with respect to the target distribution for a specific generative model, propose a computable approximation of the loss, and demonstrate that the element of the proposed class that minimizes the approximated risk is able to exploit a natural bias–variance trade-off in task space in both simulated and real-data settings. We conclude by discussing further applications, limitations, and potential future research directions.
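For linear models, the proposed class has a very direct form: w(α) = α·w̄_src + (1 − α)·w_tgt for α ∈ [0, 1]. A minimal sketch follows; note it tunes α by grid search on held-out target data as a stand-in for the paper's analytic risk approximation, which is a simplification.

```python
# Sketch of the convex-combination class for linear classifiers. Selecting
# alpha on a validation split substitutes for the paper's computable
# approximation of the expected target loss.
import numpy as np

def combine_classifiers(w_sources, w_target, X_val, y_val, n_grid=101):
    """w_sources: list of (d,) source-task weight vectors; y_val in {-1, +1}."""
    w_avg = np.mean(w_sources, axis=0)          # averaged source classifier
    best_alpha, best_err = 0.0, np.inf
    for alpha in np.linspace(0.0, 1.0, n_grid):
        w = alpha * w_avg + (1.0 - alpha) * w_target
        err = np.mean(np.sign(X_val @ w) != y_val)  # 0-1 validation loss
        if err < best_err:
            best_alpha, best_err = alpha, err
    return best_alpha * w_avg + (1.0 - best_alpha) * w_target, best_alpha
```

Small α leans on the target-trained classifier (low bias, high variance when target data are scarce); large α leans on the source average (the reverse), which is the bias–variance trade-off in task space the abstract describes.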
-
Why do brains have inhibitory connections? Why do deep networks have negative weights? We propose an answer from the perspective of representation capacity. We believe representing functions is the primary role of both (i) the brain in natural intelligence, and (ii) deep networks in artificial intelligence. Our answer to why there are inhibitory/negative weights is: to learn more functions. We prove that, in the absence of negative weights, neural networks with non-decreasing activation functions are not universal approximators. While this may be an intuitive result to some, to the best of our knowledge, there is no formal theory, in either machine learning or neuroscience, that demonstrates why negative weights are crucial in the context of representation capacity. Further, we provide insights into the geometric properties of the representation space that non-negative deep networks cannot represent. We expect these insights will yield a deeper understanding of more sophisticated inductive priors imposed on the distribution of weights that lead to more efficient biological and machine learning.
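The intuition behind the theorem can be checked numerically: a network with all-non-negative weights and a non-decreasing activation is a composition of non-decreasing maps, so its output is non-decreasing in every input coordinate, and it therefore cannot represent XOR, which must fall at (1,1) after rising at (0,1) and (1,0). A small demonstration (illustrative, not from the paper):

```python
# With non-negative weights and ReLU (non-decreasing), the network output is
# monotone in every input, so net(1,1) >= net(0,1) always; XOR needs the
# opposite, hence no such network can represent it.
import numpy as np

rng = np.random.default_rng(0)
W1 = np.abs(rng.normal(size=(8, 2)))   # non-negative first-layer weights
w2 = np.abs(rng.normal(size=8))        # non-negative readout weights

def net(x):
    return w2 @ np.maximum(0.0, W1 @ x)   # composition of non-decreasing maps

for x in [np.array([0., 0.]), np.array([0., 1.]),
          np.array([1., 0.]), np.array([1., 1.])]:
    print(x, net(x))   # output is largest at (1,1), unlike XOR
```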
-
Natural intelligences (NIs) thrive in a dynamic world – they learn quickly, sometimes with only a few samples. In contrast, artificial intelligences (AIs) typically learn with a prohibitive number of training samples and computational power. What difference in design principles between NIs and AIs could contribute to such a discrepancy? Here, we investigate the role of weight polarity: developmental processes initialize NIs with advantageous polarity configurations; as NIs grow and learn, synapse magnitudes update, yet polarities are largely kept unchanged. We demonstrate with simulation and image classification tasks that if weight polarities are adequately set a priori, then networks learn with less time and data. We also explicitly illustrate situations in which setting the weight polarities a priori is disadvantageous for networks. Our work illustrates the value of weight polarities from the perspective of statistical and computational efficiency during learning.
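One way to test this hypothesis in practice is to fix a sign mask at initialization (the "developmental" polarity) and let gradients update only the magnitudes. The layer below is a hedged PyTorch sketch under that assumption; the class name and parameterization are illustrative, not the authors' code.

```python
# Sketch: weight polarities frozen at initialization, magnitudes learnable.
import torch
import torch.nn as nn

class PolarityFixedLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        init = torch.randn(out_features, in_features)
        self.register_buffer("sign", torch.sign(init))              # frozen polarity
        self.log_mag = nn.Parameter(torch.log(init.abs() + 1e-4))   # learnable magnitude
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        weight = self.sign * torch.exp(self.log_mag)  # magnitude stays positive,
        return x @ weight.t() + self.bias             # so polarity never flips

# Usage: drop-in replacement for nn.Linear, e.g. PolarityFixedLinear(784, 128).
```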